- Title
- Cost-effective testing of a deep learning model through input reduction
- Creator
- Zhou, Jianyi; Li, Feng; Dong, Jinhao; Zhang, Hongyu; Hao, Dan
- Relation
2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE). Proceedings of 2020 IEEE 31st International Symposium on Software Reliability Engineering (ISSRE) (Coimbra, Portugal, 12-15 October 2020) p. 289-300
- Relation
- ARC.DP200102940 http://purl.org/au-research/grants/arc/DP200102940
- Publisher Link
- http://dx.doi.org/10.1109/ISSRE5003.2020.00035
- Publisher
- Institute of Electrical and Electronics Engineers (IEEE)
- Resource Type
- conference paper
- Date
- 2020
- Description
With the increasing adoption of Deep Learning (DL) models in various applications, testing DL models is vitally important. However, testing DL models is costly; for example, manually labelling test data is widely recognized to be expensive. To reduce testing cost, we propose to select only a subset of the testing data that is small yet representative enough for a quick estimation of the performance of DL models. Our approach, DeepReduce, adopts a two-phase strategy. First, it selects testing data so as to satisfy testing adequacy. Then, it selects additional testing data so that the distribution of the selected data approximates that of the whole testing data, by minimizing the relative entropy between the two. We evaluate DeepReduce on four widely-used datasets (with 15 models in total). We find that DeepReduce reduces the whole testing data to 7.5% on average and can reliably estimate the performance of DL models.
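The second phase described above can be illustrated with a small sketch. The following is not the paper's implementation: it is a simplified, hypothetical greedy selector that matches the subset's predicted-class distribution to that of the full testing data by minimizing relative entropy (KL divergence) at each step; the function names and the use of predicted labels as the distribution of interest are assumptions for illustration.

```python
import math
from collections import Counter

def kl_divergence(p, q, eps=1e-12):
    """Relative entropy KL(p || q) of two aligned categorical distributions.
    eps guards against log(0) for empty bins."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def distribution(counts, classes, total):
    """Normalized class histogram over a fixed class ordering."""
    return [counts.get(c, 0) / total for c in classes]

def greedy_reduce(pred_labels, budget):
    """Greedily pick `budget` test inputs whose predicted-class
    distribution tracks the full set's, minimizing KL at each step.
    (Illustrative only; not the DeepReduce algorithm itself.)"""
    classes = sorted(set(pred_labels))
    full_counts = Counter(pred_labels)
    p = distribution(full_counts, classes, len(pred_labels))

    selected, sel_counts = [], Counter()
    remaining = set(range(len(pred_labels)))
    for _ in range(budget):
        best_i, best_kl = None, float("inf")
        for i in remaining:
            # Tentatively add item i and score the resulting subset.
            sel_counts[pred_labels[i]] += 1
            q = distribution(sel_counts, classes, len(selected) + 1)
            kl = kl_divergence(p, q)
            sel_counts[pred_labels[i]] -= 1
            if kl < best_kl:
                best_i, best_kl = i, kl
        selected.append(best_i)
        sel_counts[pred_labels[best_i]] += 1
        remaining.remove(best_i)
    return selected
```

For instance, given ten inputs predicted as classes 0/1/2 in a 6:3:1 ratio, a budget of 5 yields a subset with a 3:1:1 class ratio, mirroring the full set's distribution.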
- Subject
deep learning; sensitivity analysis; estimation; data models; software; software reliability; testing
- Identifier
- http://hdl.handle.net/1959.13/1450665
- Identifier
- uon:44006
- Identifier
- ISBN:9781728198712
- Identifier
- ISSN:1071-9458
- Language
- eng
- Reviewed